
    Comparative Ellipsis and Variable Binding

    In this paper, we discuss the question of whether phrasal comparatives should be given a direct interpretation or require an analysis as elliptical constructions, and answer it with both Yes and No. The most adequate analysis of wide reading attributive (WRA) comparatives seems to be as cases of ellipsis, while a direct (but asymmetric) analysis fits the data for narrow scope attributive comparatives. The question of whether it is a syntactic or a semantic process which provides the missing linguistic material in the complement of WRA comparatives is also given a complex answer: linguistic context is accessed by combining a reconstruction operation with a mechanism of anaphoric reference. The analysis makes only a few straightforward syntactic assumptions. In part, this is made possible because the use of Generalized Functional Application as a semantic operation allows us to model semantic composition in a flexible way. Comment: Postscript, 15 pages

    Discourse-sensitive automatic identification of generic expressions

    This paper describes a novel sequence labeling method for identifying generic expressions, which refer to kinds or arbitrary members of a class, in discourse context. The automatic recognition of such expressions is important for any natural language processing task that requires text understanding. Prior work has focused on identifying generic noun phrases; we present a new corpus in which not only subjects but also clauses are annotated for genericity according to an annotation scheme motivated by semantic theory. Our context-aware approach for automatically identifying generic expressions uses conditional random fields and outperforms previous work based on local decisions when evaluated on this corpus and on related data sets (ACE-2 and ACE-2005).
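    The core advantage of a sequence-labeling approach over local decisions is that the label of one clause can influence its neighbours. A minimal sketch of that idea, using a linear-chain Viterbi decode over invented labels and scores (these are illustrative only, not the features or weights of the paper's CRF model):

    ```python
    # Linear-chain Viterbi decoding: transition scores let the label of one
    # clause influence adjacent clauses, unlike independent per-clause decisions.
    # Labels, emission scores, and transition weights are invented for illustration.

    LABELS = ["GENERIC", "NON-GENERIC"]

    def viterbi(emissions, transitions):
        """emissions: list of {label: score}; transitions: {(prev, cur): score}."""
        best = [dict(emissions[0])]
        back = [{}]
        for t in range(1, len(emissions)):
            scores, ptrs = {}, {}
            for cur in LABELS:
                prev = max(LABELS, key=lambda p: best[-1][p] + transitions[(p, cur)])
                scores[cur] = best[-1][prev] + transitions[(prev, cur)] + emissions[t][cur]
                ptrs[cur] = prev
            best.append(scores)
            back.append(ptrs)
        last = max(LABELS, key=lambda l: best[-1][l])
        path = [last]
        for ptrs in reversed(back[1:]):
            path.append(ptrs[path[-1]])
        return list(reversed(path))

    # Three clauses: the middle one is locally ambiguous (1.1 > 1.0 would favour
    # NON-GENERIC under a local decision), but a mild preference for label
    # continuity across adjacent clauses flips it to GENERIC.
    emissions = [
        {"GENERIC": 2.0, "NON-GENERIC": 0.5},
        {"GENERIC": 1.0, "NON-GENERIC": 1.1},
        {"GENERIC": 1.8, "NON-GENERIC": 0.4},
    ]
    transitions = {(p, c): (0.5 if p == c else 0.0) for p in LABELS for c in LABELS}
    print(viterbi(emissions, transitions))  # → ['GENERIC', 'GENERIC', 'GENERIC']
    ```

    Under a purely local decision the middle clause would be labeled NON-GENERIC; the transition scores encode the discourse-context sensitivity that the abstract credits for the improvement.
    
    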

    Vagueness, Ambiguity, and Underspecification

    No abstract

    Situation entity types: automatic classification of clause-level aspect

    This paper describes the first robust approach to automatically labeling clauses with their situation entity type (Smith, 2003), capturing aspectual phenomena that are relevant for interpreting both clause-level semantics and discourse structure. Previous work on this task used a small data set from a limited domain and relied mainly on words as features, an approach which is impractical in larger settings. We provide a new corpus of texts from 13 genres (40,000 clauses) annotated with situation entity types. We show that our sequence labeling approach using distributional information in the form of Brown clusters, as well as syntactic-semantic features targeted to the task, is robust across genres, reaching accuracies of up to 76%.
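    Brown clusters make word features genre-robust because distributionally similar words share cluster bit-string prefixes, so a classifier can generalize from words seen in training to unseen words in the same cluster. A toy sketch (the bit strings below are invented; real clusters come from running Brown clustering on a large corpus):

    ```python
    # Invented Brown-cluster bit strings: distributionally similar words
    # share prefixes, so prefix features generalize beyond surface forms.
    BROWN = {
        "cat":    "0110",
        "dog":    "0111",  # shares prefix "011" with "cat"
        "ran":    "1010",
        "walked": "1011",  # shares prefix "101" with "ran"
    }

    def cluster_features(word, prefix_lengths=(2, 3)):
        """Return cluster-prefix features at several granularities."""
        bits = BROWN.get(word)
        if bits is None:
            return {"oov": True}
        return {f"brown_p{k}": bits[:k] for k in prefix_lengths}

    print(cluster_features("dog"))  # → {'brown_p2': '01', 'brown_p3': '011'}
    ```

    A model trained only on clauses containing "cat" still fires the same `brown_p3` feature when it sees "dog", which is what makes such features usable across 13 genres where pure word features would be too sparse.
    
    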

    A Mixture Model for Learning Multi-Sense Word Embeddings

    Word embeddings are now a standard technique for inducing meaning representations for words. To obtain good representations, it is important to take into account the different senses of a word. In this paper, we propose a mixture model for learning multi-sense word embeddings. Our model generalizes previous work in that it allows inducing different weights for the different senses of a word. The experimental results show that our model outperforms previous models on standard evaluation tasks. Comment: *SEM 201
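    The mixture idea can be illustrated with a toy computation: each word keeps several sense vectors, the context induces softmax weights over the senses, and the contextual representation is the weighted mixture. The vectors and senses below are made up for illustration; the paper's actual model and training objective differ.

    ```python
    import math

    def dot(u, v):
        return sum(a * b for a, b in zip(u, v))

    def softmax(xs):
        m = max(xs)
        exps = [math.exp(x - m) for x in xs]
        z = sum(exps)
        return [e / z for e in exps]

    def mixture_embedding(sense_vecs, context_vec):
        """Weight each sense by its (softmaxed) similarity to the context,
        then return the weighted mixture of sense vectors."""
        weights = softmax([dot(s, context_vec) for s in sense_vecs])
        dim = len(sense_vecs[0])
        mixed = [sum(w * s[i] for w, s in zip(weights, sense_vecs)) for i in range(dim)]
        return mixed, weights

    # Toy "bank": a financial sense and a river sense as 2-d vectors.
    senses = [[1.0, 0.0], [0.0, 1.0]]
    financial_context = [2.0, 0.1]
    mixed, weights = mixture_embedding(senses, financial_context)
    print(weights)  # the weight on the financial sense dominates
    ```

    Because the weights are soft rather than a hard sense assignment, ambiguous contexts yield a genuine blend of senses, which is the flexibility the abstract highlights over earlier multi-sense models.
    
    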

    The Verbmobil semantic formalism (Version 1.3)

    This report describes the semantic formalism developed at Saarbrücken University as part of the Verbmobil project. The formalism is based upon DRT with additional functionality to meet the requirements on semantic construction arising from spoken dialogue translation. We define the syntax of the formalism and illustrate the semantic composition process in detail.
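    The core data structure of any DRT-based formalism is the Discourse Representation Structure: a set of discourse referents plus a set of conditions over them, composed by merging. A minimal sketch of the textbook DRS and its merge operation (this illustrates plain DRT, not the Verbmobil extensions, whose syntax the report itself defines):

    ```python
    from dataclasses import dataclass, field

    @dataclass
    class DRS:
        """A Discourse Representation Structure: referents + conditions."""
        referents: set = field(default_factory=set)
        conditions: list = field(default_factory=list)

        def merge(self, other):
            """Standard DRS merge: union the referents, concatenate the conditions."""
            return DRS(self.referents | other.referents,
                       self.conditions + other.conditions)

    # "A farmer sleeps": the indefinite NP introduces referent x with one
    # condition; the VP contributes a condition on the same referent.
    np = DRS({"x"}, ["farmer(x)"])
    vp = DRS(set(), ["sleep(x)"])
    print(np.merge(vp))
    ```

    Semantic composition in a DRT setting proceeds by building small DRSs for constituents and merging them as the parse is assembled, which is the composition process the report illustrates.
    
    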